AI ethics
The Machine Ethics podcast: moral agents with Jen Semler
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This month, Ben met in person with Jen Semler. Jen Semler is a Postdoctoral Fellow at Cornell Tech's Digital Life Initiative. Her research focuses on the intersection of ethics, technology, and moral agency. She holds a DPhil (PhD) in philosophy from the University of Oxford.
The Good Robot podcast: the role of designers in AI ethics with Tomasz Hollanek
Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode, we talk to Tomasz Hollanek, a researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Tomasz argues that design is central to AI ethics and explores the role designers should play in shaping ethical AI systems. The conversation examines the importance of AI literacy, the responsibilities of journalists in reporting on AI technologies, and how design choices embed social and political values into AI. Together, we reflect on how critical design can challenge existing power dynamics and open up more just and inclusive approaches to human-AI interaction.
RWDS Big Questions: how do we balance innovation and regulation in the world of AI?
AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged -- yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.
Interview with Alice Xiang: Fair human-centric image dataset for ethical AI benchmarking
Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The research behind the dataset, named Fair Human-Centric Image Benchmark (FHIBE), has been published in Nature. FHIBE is the first publicly available, globally diverse, consent-based human image dataset (comprising over 10,000 human images) for evaluating bias across a wide variety of computer vision tasks. We sat down with project lead Alice Xiang, Global Head of AI Governance at Sony Group and Lead Research Scientist for AI Ethics at Sony AI, to discuss the project and the broader implications of this research. Could you start by introducing the project and taking us through some of the main contributions?
A Study on the Framework for Evaluating the Ethics and Trustworthiness of Generative AI
Jeong, Cheonsu, Lee, Seunghyun, Jeong, Seonhee, Kim, Sungsu
This study provides an in-depth analysis of the ethical and trustworthiness challenges emerging alongside the rapid advancement of generative artificial intelligence (AI) technologies and proposes a comprehensive framework for their systematic evaluation. While generative AI, such as ChatGPT, demonstrates remarkable innovative potential, it simultaneously raises ethical and social concerns, including bias, harmfulness, copyright infringement, privacy violations, and hallucination. Current AI evaluation methodologies, which mainly focus on performance and accuracy, are insufficient to address these multifaceted issues. Thus, this study emphasizes the need for new human-centered criteria that also reflect social impact. To this end, it identifies key dimensions for evaluating the ethics and trustworthiness of generative AI: fairness, transparency, accountability, safety, privacy, accuracy, consistency, robustness, explainability, copyright and intellectual property protection, and source traceability. It then develops detailed indicators and assessment methodologies for each. Moreover, it provides a comparative analysis of AI ethics policies and guidelines in South Korea, the United States, the European Union, and China, deriving key approaches and implications from each. The proposed framework applies across the AI lifecycle and integrates technical assessments with multidisciplinary perspectives, thereby offering practical means to identify and manage ethical risks in real-world contexts. Ultimately, the study establishes an academic foundation for the responsible advancement of generative AI and delivers actionable insights for policymakers, developers, users, and other stakeholders, supporting the positive societal contributions of AI technologies.
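To make the abstract's idea of per-dimension evaluation concrete, here is a minimal sketch of how scores along the named dimensions could be aggregated into a single trustworthiness rating. The function name, the equal weighting, and the [0, 1] score scale are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: aggregating per-dimension scores into one rating.
# Dimension list follows the abstract; weighting scheme is an assumption.

DIMENSIONS = [
    "fairness", "transparency", "accountability", "safety", "privacy",
    "accuracy", "consistency", "robustness", "explainability",
    "ip_protection", "source_traceability",
]

def trustworthiness_score(scores: dict) -> float:
    """Average per-dimension scores (each assumed to lie in [0, 1])."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

example = {d: 0.8 for d in DIMENSIONS}
print(round(trustworthiness_score(example), 2))  # prints 0.8
```

A real framework of this kind would likely weight dimensions differently by deployment context (the paper's lifecycle framing suggests as much), but an unweighted mean keeps the sketch simple.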
The Machine Ethics podcast: AI Ethics, Risks and Safety Conference 2025
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This is a special live panel episode we recorded at the AI Ethics, Risks and Safety Conference 2025 in Bristol, May 2025. This episode was a panel titled: Living with AI: the next five years hosted at the conference. For more information about the AI Ethics, Risks and Safety Conference go to Collective Intelligence's website. Thanks to Karin Rudolph and everyone who helped organise another great event this year.
Assessing the Ecological Impact of AI
Philosophers of technology have recently started paying more attention to the environmental impacts of AI, in particular of large language models (LLMs) and generative AI (genAI) applications. Meanwhile, few AI developers give concrete estimates of the ecological impact of their models and products, and even when they do, their analysis is often limited to greenhouse gas emissions from certain stages of AI development or use. The current proposal encourages practically viable analyses of the sustainability aspects of genAI informed by philosophical ideas.
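To illustrate the kind of stage-limited greenhouse-gas estimate the abstract refers to, here is a back-of-envelope sketch: operational energy for a training run (GPU power × GPU-hours × datacentre overhead) multiplied by grid carbon intensity. Every figure below is an illustrative assumption, not a measurement from any real model, and the sketch deliberately omits the embodied emissions and inference-stage costs whose neglect the proposal criticises.

```python
# Back-of-envelope sketch of operational CO2e for a training run.
# All default figures are illustrative assumptions.

def training_emissions_kg(
    gpu_count: int,
    hours: float,
    gpu_power_kw: float = 0.4,    # assumed average draw per GPU (kW)
    pue: float = 1.2,             # assumed datacentre power usage effectiveness
    grid_kg_per_kwh: float = 0.4, # assumed grid carbon intensity (kg CO2e/kWh)
) -> float:
    """Estimate operational emissions (kg CO2e) for one training run."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_per_kwh

# e.g. a hypothetical 64-GPU run over two weeks (336 hours):
print(round(training_emissions_kg(64, 336), 1))  # prints 4128.8
```

The point of the sketch is how narrow it is: it captures one lifecycle stage under fixed assumptions, which is precisely the limitation the proposal argues a philosophically informed sustainability analysis should move beyond.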
To Post or Not to Post: AI Ethics in the Age of Big Tech
What is the role of an ethicist? Is it to be an impartial observer? A guide to what is good or bad? Here, I will explore these different roles in the context of AI ethics through the terms descriptive, normative, and action AI ethics. AI ethics is a specific field of applied ethics nested in technology ethics and computer ethics.
A Toolkit for Compliance, a Toolkit for Justice: Drawing on Cross-sectoral Expertise to Develop a Pro-justice EU AI Act Toolkit
Hollanek, Tomasz, Pi, Yulu, Fiorini, Cosimo, Vignali, Virginia, Peters, Dorian, Drage, Eleanor
The introduction of the AI Act in the European Union presents the AI research and practice community with a set of new challenges related to compliance. While it is certain that AI practitioners will require additional guidance and tools to meet these requirements, previous research on toolkits that aim to translate the theory of AI ethics into development and deployment practice suggests that such resources suffer from multiple limitations. These limitations stem, in part, from the fact that the toolkits are either produced by industry-based teams or by academics whose work tends to be abstract and divorced from the realities of industry. In this paper, we discuss the challenge of developing an AI ethics toolkit for practitioners that helps them comply with new AI-focused regulation, but that also moves beyond mere compliance to consider broader socio-ethical questions throughout development and deployment. The toolkit was created through a cross-sectoral collaboration between an academic team based in the UK and an industry team in Italy. We outline the background and rationale for creating a pro-justice AI Act compliance toolkit, detail the process undertaken to develop it, and describe the collaboration and negotiation efforts that shaped its creation. We aim for the described process to serve as a blueprint for other teams navigating the challenges of academia-industry partnerships and aspiring to produce usable and meaningful AI ethics resources.
A Participatory Strategy for AI Ethics in Education and Rehabilitation grounded in the Capability Approach
Cesaroni, Valeria, Pasqua, Eleonora, Bisconti, Piercosma, Galletti, Martina
AI-based technologies have significant potential to enhance inclusive education and clinical-rehabilitative contexts for children with Special Educational Needs and Disabilities. AI can enhance learning experiences, empower students, and support both teachers and rehabilitators. However, their usage presents challenges that require a systemic-ecological vision, ethical considerations, and participatory research. Therefore, research and technological development must be rooted in a strong ethical-theoretical framework. The Capability Approach (a theoretical model of disability, human vulnerability, and inclusion) offers a more relevant perspective on functionality, effectiveness, and technological adequacy in inclusive learning environments. In this paper, we propose a participatory research strategy with different stakeholders through a case study on the ARTIS Project, which develops an AI-enriched interface to support children with text comprehension difficulties. Our research strategy integrates ethical, educational, clinical, and technological expertise in designing and implementing AI-based technologies for children's learning environments through focus groups and collaborative design sessions. We believe that this holistic approach to AI adoption in education can help bridge the gap between technological innovation and ethical responsibility.